
    Two-Way Automata Making Choices Only at the Endmarkers

    The question of the state-size cost of simulating two-way nondeterministic finite automata (2NFAs) by two-way deterministic finite automata (2DFAs) was raised in 1978 and, despite many attempts, is still open. The problem has since been attacked by restricting the power of 2DFAs (e.g., by restricting the input head movement) to a degree for which exponential gaps between the weaker model and standard 2NFAs could already be derived. Here we take the opposite approach, increasing the power of 2DFAs to a degree for which a subexponential conversion from the stronger model to standard 2DFAs is still possible. In particular, it turns out that a subexponential conversion is possible for two-way automata that make nondeterministic choices only when the input head scans one of the input tape endmarkers, with no restriction on the input head movement. This implies that an exponential gap between 2NFAs and 2DFAs can be obtained only by unrestricted 2NFAs using capabilities beyond the proposed new model. As an additional bonus, conversion into a machine for the complement of the original language is polynomial in this model, and the same holds for making such machines self-verifying, halting, or unambiguous. Finally, any superpolynomial lower bound for the simulation of such machines by standard 2DFAs would imply L ≠ NL. In the same way, the alternating version of these machines is related to the classical complexity questions L =? NL =? P.
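    The restricted model can be pictured with a small simulator. The sketch below is a toy illustration, not the paper's construction: it simulates a two-way automaton over {a, b, c} whose nondeterministic branching is confined to the endmarkers, while interior moves are deterministic. All names and the example language are invented here.

```python
# Toy two-way automaton whose nondeterministic choices occur only on
# the endmarkers '⊢' and '⊣'; interior transitions are deterministic.
LEFT, RIGHT = -1, +1

def run(word, det, end_choices, start, accept):
    tape = '⊢' + word + '⊣'
    # A configuration is (state, head position); the configuration space
    # is finite, so a worklist with a visited set explores every branch.
    stack, seen = [(start, 0)], set()
    while stack:
        q, i = stack.pop()
        if (q, i) in seen or not 0 <= i < len(tape):
            continue
        seen.add((q, i))
        if q == accept:
            return True
        sym = tape[i]
        if sym in '⊢⊣':
            # Only here may the machine branch nondeterministically.
            stack.extend((q2, i + d) for q2, d in end_choices.get((q, sym), []))
        elif (q, sym) in det:
            q2, d = det[(q, sym)]
            stack.append((q2, i + d))
    return False

# Example machine: at '⊢' it guesses whether to verify "contains an a"
# or "ends with b", then checks that guess deterministically.
DET = {
    ('scan_a', 'a'): ('acc', RIGHT),
    ('scan_a', 'b'): ('scan_a', RIGHT),
    ('scan_a', 'c'): ('scan_a', RIGHT),
    ('scan_b', 'b'): ('last_b', RIGHT),
    ('scan_b', 'a'): ('scan_b', RIGHT),
    ('scan_b', 'c'): ('scan_b', RIGHT),
    ('last_b', 'b'): ('last_b', RIGHT),
    ('last_b', 'a'): ('scan_b', RIGHT),
    ('last_b', 'c'): ('scan_b', RIGHT),
}
END = {('start', '⊢'): [('scan_a', RIGHT), ('scan_b', RIGHT)],
       ('last_b', '⊣'): [('acc', LEFT)]}
```

    Because the configuration space is finite, the exhaustive search always terminates, which is what makes the complementation and halting properties of the restricted model plausible.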

    Converting Nondeterministic Two-Way Automata into Small Deterministic Linear-Time Machines

    In 1978 Sakoda and Sipser raised the question of the cost, in terms of size of representations, of transforming two-way and one-way nondeterministic automata into equivalent two-way deterministic automata. Despite all attempts, the question has been answered only in particular cases (e.g., restrictions of the class of simulated automata or of the class of simulating automata); in the general case it remains open, the best known upper bound being exponential. We present a new approach in which unrestricted nondeterministic finite automata are simulated by deterministic models extending two-way deterministic finite automata, paying only a polynomial increase in size. Specifically, we study the costs of converting nondeterministic finite automata into some variants of one-tape deterministic Turing machines working in linear time, namely Hennie machines, weight-reducing Turing machines, and weight-reducing Hennie machines. All these variants are known to share the same computational power: they characterize the class of regular languages.

    Isosurface extraction and interpretation on very large datasets in geophysics

    In order to deal with the heavy trend in size increase of volumetric datasets, research in isosurface extraction has focused in the past few years on related aspects such as surface simplification and load-balanced parallel algorithms. We present in this paper a parallel, block-wise extension of the tandem algorithm [Attali et al. 2005], which simplifies on the fly an isosurface being extracted. Our approach minimizes the overall memory consumption using an adequate block splitting and merging strategy and by introducing a component dumping mechanism that drastically reduces the amount of memory needed for particular datasets such as those encountered in geophysics. As soon as they are detected, surface components are migrated to disk along with a metadata index (oriented bounding box, volume, etc.) that allows further improved exploration scenarios (small-component removal or selection of components with a particular orientation, for instance). For ease of implementation, we carefully describe a master and worker algorithm architecture that clearly separates the four required basic tasks. We show several results of our parallel algorithm applied to a 7000 × 1600 × 2000 geophysics dataset.

    Loss of brain inter-frequency hubs in Alzheimer's disease

    Alzheimer's disease (AD) causes alterations of brain network structure and function, the latter consisting of connectivity changes between oscillatory processes in different frequency channels. We proposed a multi-layer network approach to analyze multiple-frequency brain networks inferred from magnetoencephalographic recordings during resting state in AD subjects and age-matched controls. The main results showed that brain networks tend to facilitate information propagation across different frequencies, as measured by the multi-participation coefficient (MPC). However, regional connectivity in AD subjects was abnormally distributed across frequency bands compared to controls, causing significant decreases of the MPC. This effect was mainly localized in association areas and in the cingulate cortex, which acted, in the healthy group, as a true inter-frequency hub. MPC values significantly correlated with the memory impairment of AD subjects, as measured by the total recall score. The most predictive regions belonged to components of the default-mode network that are typically affected by atrophy, metabolism disruption and amyloid-beta deposition. We also evaluated the diagnostic power of the MPC and showed that it led to increased classification accuracy (78.39%) and sensitivity (91.11%). These findings shed new light on the brain functional alterations underlying AD and provide analytical tools for identifying multi-frequency neural mechanisms of brain diseases.
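    A participation coefficient of this kind can be sketched as follows. This assumes the normalization of the multiplex participation coefficient of Battiston et al.; whether it matches the MPC definition used in the paper is an assumption, and the function name is invented.

```python
import numpy as np

def multi_participation(adj_layers):
    """Participation of each node across frequency layers.

    adj_layers: (M, N, N) array, one adjacency matrix per frequency band.
    Returns an (N,) array: 1 when a node's degree is spread evenly over
    the M layers, 0 when its connectivity sits in a single layer.
    """
    A = np.asarray(adj_layers, dtype=float)
    M = A.shape[0]
    k = A.sum(axis=2)              # (M, N): degree of each node per layer
    o = k.sum(axis=0)              # (N,): total degree across layers
    with np.errstate(divide="ignore", invalid="ignore"):
        frac = np.where(o > 0, k / o, 0.0)
    p = M / (M - 1) * (1.0 - (frac ** 2).sum(axis=0))
    return np.where(o > 0, p, 0.0)  # isolated nodes get 0 by convention
```

    A regional decrease of this quantity, as reported for AD subjects, corresponds to a node whose connections collapse onto fewer frequency bands.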

    Parallel extraction and simplification of large isosurfaces using an extended tandem algorithm

    In order to deal with the common trend in size increase of volumetric datasets, in the past few years research in isosurface extraction has focused on related aspects such as surface simplification and load-balanced parallel algorithms. We present a parallel, block-wise extension of the tandem algorithm by Attali et al., which simplifies on the fly an isosurface being extracted. Our approach minimizes the overall memory consumption using an adequate block splitting and merging strategy along with a component dumping mechanism that drastically reduces the amount of memory needed for particular datasets such as those encountered in geophysics. As soon as they are detected, surface components are migrated to disk along with a metadata index (oriented bounding box, volume, etc.) that permits further improved exploration scenarios (small-component removal or selection of particularly oriented components, for instance). For ease of implementation, we carefully describe a master and worker algorithm architecture that clearly separates the four required basic tasks. We show several results of our parallel algorithm applied to a geophysical dataset of size 7000 × 1600 × 2000.
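    The component-dumping idea can be pictured as below. This is a hedged sketch with invented names (`ComponentRecord`, `dump_component`, `select_large`), not the authors' code, and it records an axis-aligned bounding box where the paper keeps an oriented one.

```python
import os
from dataclasses import dataclass
import numpy as np

@dataclass
class ComponentRecord:
    path: str           # where the geometry was dumped on disk
    bbox_min: list      # axis-aligned box here; the paper uses an OBB
    bbox_max: list
    n_vertices: int

def dump_component(vertices, out_dir, index):
    """Migrate a finished surface component to disk, keeping only a
    small metadata record in memory."""
    path = os.path.join(out_dir, f"component_{len(index):05d}.npy")
    np.save(path, vertices)
    index.append(ComponentRecord(
        path=path,
        bbox_min=vertices.min(axis=0).tolist(),
        bbox_max=vertices.max(axis=0).tolist(),
        n_vertices=len(vertices)))

def select_large(index, min_vertices):
    """Later exploration pass, e.g. dropping small components,
    answered from the index alone without reloading any geometry."""
    return [r for r in index if r.n_vertices >= min_vertices]
```

    The point of the design is that exploration queries (small-component removal, orientation-based selection) touch only the in-memory index; geometry is reloaded from disk only for the components that survive the filter.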

    Slimming Brick Cache Strategies for Seismic Horizon Propagation Algorithms

    In this paper, we propose a new bricked cache system suited to a particular surface propagation algorithm: seismic horizon reconstruction. The application domain of this algorithm is the interpretation of seismic volumes used, for instance, by petroleum companies for oil prospecting. To guarantee the optimality of the extracted surface, the algorithm must access the data volume randomly. This lack of data locality requires the whole volume to reside in main memory to achieve decent performance. For volumes larger than memory, we show that a classical brick cache strategy can still deliver good performance up to a certain size. As these volumes grow very quickly and can now exceed 200 GB, we demonstrate that the performance of the classical algorithm drops dramatically on a standard workstation with a limited amount of memory (currently 8 GB to 16 GB). To handle such large volumes, we introduce a new slimming brick cache strategy in which brick size evolves with the processed data: at each step of the algorithm, already-processed data can be removed from the cache. This new brick format allows a larger number of bricks to be kept in memory. We further improve the releasing mechanism by preferentially filling the “holes” that appear in the surface during the propagation process. With this new cache strategy, horizons can be extracted from volumes up to 75 times the size of the available cache memory. We discuss the performance and results of this new approach on both synthetic and real data.
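    The slimming idea can be sketched as a cache whose bricks shrink as the propagation consumes their samples, so more bricks fit in the same budget. The class below is an invented minimal illustration, not the paper's implementation; it measures the budget in samples rather than bytes and uses a simple smallest-brick eviction policy.

```python
class SlimmingBrickCache:
    """Brick cache where each read also releases the sample: the
    propagation visits each sample once, so it is never needed again."""

    def __init__(self, max_samples, load_brick):
        self.max_samples = max_samples    # cache budget, in samples
        self.load_brick = load_brick      # callback: brick id -> {voxel: value}
        self.bricks = {}                  # brick id -> remaining samples
        self.size = 0                     # samples currently resident

    def take(self, brick_id, voxel):
        """Read one sample and remove it from its brick."""
        if brick_id not in self.bricks:
            self._load(brick_id)
        value = self.bricks[brick_id].pop(voxel)
        self.size -= 1
        if not self.bricks[brick_id]:     # brick fully consumed: drop it
            del self.bricks[brick_id]
        return value

    def _load(self, brick_id):
        data = dict(self.load_brick(brick_id))
        while self.bricks and self.size + len(data) > self.max_samples:
            # Evict the most-consumed (smallest) resident brick first.
            victim = min(self.bricks, key=lambda b: len(self.bricks[b]))
            self.size -= len(self.bricks.pop(victim))
        self.bricks[brick_id] = data
        self.size += len(data)
```

    Because half-consumed bricks occupy less of the budget than freshly loaded ones, the effective number of resident bricks grows as the propagation advances, which is the mechanism behind processing volumes far larger than the cache.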

    Directional and fluctuating asymmetries in the perinatal period: an exploratory study in past populations

    Directional asymmetry (the preferential expression of a character on one side rather than the other) and fluctuating asymmetry (minor, random and independent deviations from bilateral symmetry) are increasingly investigated in modern corpuses of deceased perinates, since they are involved in assessing preferential lateralization and developmental stability, respectively. However, few studies have examined these issues in perinates from past populations. Our study was conducted to investigate the absence or presence of these two types of asymmetry in an archaeological corpus, using an exploratory approach adapting existing protocols to the specific features of this osteological material. The study was carried out on a corpus of 116 individuals from three collections (Necropolis 8B-51 on Saï Island, Sudan, Classic Kerma period; the Blandy-les-Tours cemetery in the Paris Basin, 10th-12th centuries; and the Provins cemetery in the Paris Basin, 13th-18th centuries). The investigation produced an assessment of the methodological feasibility of these analyses for skeletonized remains and allowed us to identify several variables expressing these asymmetries. We expect these preliminary results to initiate discussion on the use of these biological markers for osteobiographical analyses of immature individuals from past populations, particularly as estimators of developmental disturbances.

    Measurement of the cross-section and charge asymmetry of W bosons produced in proton-proton collisions at √s = 8 TeV with the ATLAS detector

    This paper presents measurements of the W⁺ → μ⁺ν and W⁻ → μ⁻ν cross-sections and the associated charge asymmetry as a function of the absolute pseudorapidity of the decay muon. The data were collected in proton-proton collisions at a centre-of-mass energy of 8 TeV with the ATLAS experiment at the LHC and correspond to a total integrated luminosity of 20.2 fb⁻¹. The precision of the cross-section measurements varies between 0.8% and 1.5% as a function of the pseudorapidity, excluding the 1.9% uncertainty on the integrated luminosity. The charge asymmetry is measured with an uncertainty between 0.002 and 0.003. The results are compared with predictions based on next-to-next-to-leading-order calculations with various parton distribution functions and have the sensitivity to discriminate between them.
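    The asymmetry in each |η| bin follows the standard definition A = (σ⁺ − σ⁻)/(σ⁺ + σ⁻). The snippet below applies that definition with simple uncorrelated error propagation; the inputs are illustrative numbers, not the measured values from the paper.

```python
import math

def charge_asymmetry(sp, sm, dsp=0.0, dsm=0.0):
    """A = (σ⁺ − σ⁻) / (σ⁺ + σ⁻) for one |η| bin.

    sp, sm: W⁺ and W⁻ cross-sections; dsp, dsm: their uncertainties,
    assumed uncorrelated.
    """
    s = sp + sm
    a = (sp - sm) / s
    # ∂A/∂σ⁺ = 2σ⁻/s², ∂A/∂σ⁻ = −2σ⁺/s²; add in quadrature.
    da = 2.0 * math.hypot(sm * dsp, sp * dsm) / s**2
    return a, da
```

    Note how the uncertainty on A can be much smaller than the relative uncertainties on the individual cross-sections, since fully correlated effects such as the luminosity cancel in the ratio (the simple propagation above does not model that cancellation).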